Machine Learning for Smart and Energy-Efficient Buildings
Energy consumption in buildings, both residential and commercial, accounts
for approximately 40% of all energy usage in the U.S., and similar figures are
reported in countries around the world. This significant amount of
energy is used to maintain a comfortable, secure, and productive environment
for the occupants. It is therefore crucial that energy consumption in buildings
be optimized while maintaining satisfactory levels of occupant
comfort, health, and safety. Recently, machine learning has proven to be
an invaluable tool in deriving important insights from data and optimizing
various systems. In this work, we review the ways in which machine learning has
been leveraged to make buildings smart and energy-efficient. For the
convenience of readers, we provide a brief introduction to several machine
learning paradigms and to the components and functioning of each smart building
system we cover. Finally, we discuss the challenges faced when implementing
machine learning algorithms in smart buildings and suggest future avenues for
research at the intersection of smart buildings and machine learning.
Deep reinforcement learning with planning guardrails for building energy demand response
Building energy demand response is projected to play an important role in decarbonizing energy use. A demand response program that communicates “artificial” hourly price signals to workers as part of a social game has the potential to elicit energy consumption changes that simultaneously reduce energy costs and emissions. The efficacy of such a program depends on the pricing agent’s ability to learn how workers respond to prices and to mitigate the risk of high energy costs during this learning process. We assess the value of deep reinforcement learning (RL) for mitigating this risk. Specifically, we explore the value of combining: (i) a model-free RL method that can learn by posting price signals to workers, (ii) a supervisory “planning model” that provides a synthetic learning environment, and (iii) a guardrail method that determines whether a price should be posted to real workers or to the planning environment for feedback. In a simulated medium-sized office building, we compare our pricing agent against existing model-free and model-based deep RL agents, and against the simpler strategy of passing the time-of-use price signal on to workers. We find that our controller eliminates US$175,000 in initial investment, reduces energy costs by 30%, and curbs emissions by 32% relative to energy consumption under the time-of-use rate. In contrast, the model-free and model-based deep RL benchmarks are unable to overcome initial learning costs. Our results bode well for risk-aware deep RL facilitating the deployment of building demand response.
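To make the guardrail idea concrete, here is a minimal sketch of the routing step the abstract describes: a proposed price is posted to real workers only when it is judged safe, and is otherwise evaluated in the synthetic planning environment. The class and function names, the cost threshold, and the toy planning model below are all hypothetical illustrations, not the paper's actual implementation.

```python
class PlanningGuardrail:
    """Hypothetical guardrail: route a proposed price either to the real
    building occupants or to a synthetic planning environment, depending on
    the energy cost the planning model predicts for that price."""

    def __init__(self, cost_threshold):
        # Maximum predicted cost we are willing to risk on real workers
        # (an assumed tunable parameter, not from the paper).
        self.cost_threshold = cost_threshold

    def route(self, price, planning_model):
        # Estimate the cost of posting this price using the planning model,
        # then decide where the RL agent should collect its feedback.
        predicted_cost = planning_model(price)
        if predicted_cost <= self.cost_threshold:
            return "real"       # safe enough: post to real workers
        return "planning"       # risky: learn in the synthetic environment


# Toy stand-in for the planning model: cost grows with the deviation
# of the posted price from a nominal $0.15/kWh rate (pure illustration).
planning_model = lambda price: abs(price - 0.15) * 100

guardrail = PlanningGuardrail(cost_threshold=5.0)
print(guardrail.route(0.16, planning_model))  # small deviation -> "real"
print(guardrail.route(0.40, planning_model))  # risky price -> "planning"
```

In a full agent, the model-free RL method would propose the price, and feedback from whichever environment the guardrail selects would be used for its update; the design choice is that early, poorly calibrated prices incur synthetic rather than real learning costs.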